The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising
Authors
Abstract
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(A_i^T X_0) and each A_i is an M by N matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem min ‖X‖_* subject to y_i = Tr(A_i^T X) for all 1 ≤ i ≤ n, where ‖·‖_* denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction δ(n,M,N) = n/(MN), rank fraction ρ = rank(X_0)/min{M,N}, and aspect ratio β = M/N. Specifically, when the measurement matrices A_i have independent standard Gaussian random entries, a curve δ*(ρ) = δ*(ρ;β) exists such that, if δ > δ*(ρ), NNM typically succeeds for large M, N, whereas if δ < δ*(ρ), it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown M by N matrix X_0 is to be estimated based on direct noisy measurements Y = X_0 + Z, where the matrix Z has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem min ‖Y − X‖_F^2/2 + λ‖X‖_*. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error M(ρ;β) = lim_{M,N→∞} inf_λ sup_{rank(X) ≤ ρ·M} MSE(X, X̂_λ), where M/N → β. We report extensive experiments showing that the phase transition δ*(ρ) in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve M(ρ) = M(ρ;β) in the second problem, matrix denoising in Gaussian noise: δ*(ρ) = M(ρ), for any rank fraction 0 < ρ < 1 (at each common aspect ratio β). Our experiments considered matrices belonging to two constraint classes: real M by N matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite N by N matrices, of various ranks.
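To make the denoising problem concrete, here is a minimal NumPy sketch, not taken from the paper: the function name svt_denoise, the matrix sizes, the rank, and the choice of λ are illustrative assumptions. It relies on the standard fact that the penalized problem min ‖Y − X‖_F^2/2 + λ‖X‖_* is solved in closed form by soft-thresholding the singular values of Y.

```python
import numpy as np

def svt_denoise(Y, lam):
    """Solve min_X ||Y - X||_F^2 / 2 + lam * ||X||_* by singular value soft-thresholding."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # soft-threshold each singular value
    return (U * s_shrunk) @ Vt            # rebuild the matrix with the shrunken spectrum

# Illustrative experiment (all sizes, the rank, and the threshold are arbitrary choices).
rng = np.random.default_rng(0)
M, N, r = 60, 80, 5
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # rank-r ground truth
Y = X0 + rng.standard_normal((M, N))                            # Y = X0 + Z, Z i.i.d. N(0, 1)
X_hat = svt_denoise(Y, lam=np.sqrt(N))                          # one (untuned) threshold choice
print("per-entry MSE:", np.mean((X_hat - X0) ** 2))
```

The recovery experiments in the paper instead solve the constrained NNM problem min ‖X‖_* subject to y_i = Tr(A_i^T X), which requires a convex solver rather than a closed-form rule; the denoiser sketched above corresponds to the second problem, whose minimax MSE curve M(ρ;β) the phase transition is reported to match.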
Similar articles
THE PHASE TRANSITION OF MATRIX RECOVERY FROM GAUSSIAN MEASUREMENTS MATCHES THE MINIMAX MSE OF MATRIX DENOISING By
Let X_0 be an unknown M by N matrix. In matrix recovery, one takes n < MN linear measurements y_1, …, y_n of X_0, where y_i = Tr(a_i^T X_0) and each a_i is an M by N matrix. For measurement matrices with i.i.d. Gaussian entries, it is known that if X_0 is of low rank, it is recoverable from just a few measurements. A popular approach for matrix recovery is Nuclear Norm Minimization (NNM): solving the conv...
ACCURATE PREDICTION OF PHASE TRANSITIONS IN COMPRESSED SENSING VIA A CONNECTION TO MINIMAX DENOISING By
Compressed sensing posits that, within limits, one can undersample a sparse signal and yet reconstruct it accurately. Knowing the precise limits to such undersampling is important both for theory and practice. We present a formula precisely delineating the allowable degree of undersampling of generalized sparse objects. The formula applies to Approximate Message Passing (AMP) algorithms for ...
Minimax Risk of Matrix Denoising by Singular Value Thresholding
An unknown m by n matrix X_0 is to be estimated from noisy measurements Y = X_0 + Z, where the noise matrix Z has i.i.d. Gaussian entries. A popular matrix denoising scheme solves the nuclear norm penalization problem min_X ‖Y − X‖_F^2/2 + λ‖X‖_*, where ‖X‖_* denotes the nuclear norm (sum of singular values). This is the analog, for matrices, of ℓ1 penalization in the vector case. It has been empiricall...
Critical behavior and universality classes for an algorithmic phase transition in sparse reconstruction
Optimization problems with sparsity-inducing penalties exhibit sharp algorithmic phase transitions when the numbers of variables tend to infinity, between regimes of ‘good’ and ‘bad’ performance. The nature of these phase transitions and associated universality classes remain incompletely understood. We analyze the mean field equations from the cavity method for two sparse reconstruction algori...
THE NOISE-SENSITIVITY PHASE TRANSITION IN COMPRESSED SENSING By
Consider the noisy underdetermined system of linear equations: y = Ax + z, with n × N measurement matrix A, n < N, and Gaussian white noise z ∼ N(0, σI). Both y and A are known, both x and z are unknown, and we seek an approximation to x. When x has few nonzeros, useful approximations are often obtained by ℓ1-penalized ℓ2 minimization, in which the reconstruction x̂ solves min ‖y − Ax‖_2^2/2 + λ‖x‖...
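The ℓ1-penalized ℓ2 minimization mentioned in this excerpt is the vector analog of the nuclear-norm problems in the main abstract. As a rough illustration only (not from any of the cited papers; the function names, step size, and iteration count are arbitrary assumptions), it can be solved by iterative soft-thresholding:

```python
import numpy as np

def soft(v, t):
    """Entrywise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(y, A, lam, n_iter=500):
    """Iterative soft-thresholding for min_x ||y - A x||_2^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x + A.T @ (y - A @ x) / L,  # gradient step on the quadratic term
                 lam / L)                     # followed by the l1 proximal step
    return x
```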
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume 110, Issue 21
Pages: -
Publication date: 2013